This notebook walks through a point detection example.

This example requires a reasonably powerful workstation or VM (8 GB of memory or more).

Sequence:

  1. Query the Allen Institute Data API for a list of images to work with
  2. Download each image at the appropriate level of downsampling
  3. Run each image through an RGB filter
  4. Detect each point within a certain size range
  5. Save the list of points to a file

First, we need to add the path to the Python modules.


In [1]:
import sys, os
print 'Working in %s' % os.path.abspath(os.path.curdir)

# adding path to python modules
sys.path.append('../src/python')


Working in /vagrant/notebooks

We import the aibs module, a small wrapper created to easily query parts of the Allen Institute Data API (api.brain-map.org).


In [2]:
import aibs
reload(aibs)  # pick up any changes made since the last import
api = aibs.api()

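Under the hood, the wrapper issues queries against the RMA endpoint at api.brain-map.org. As a rough illustration (the exact criteria aibs builds are not shown in this notebook), a direct query for this specimen might look like:

import urllib2, json

# hypothetical direct RMA query, approximating what aibs.api() wraps;
# criteria syntax per the api.brain-map.org RMA documentation
url = ("http://api.brain-map.org/api/v2/data/query.json"
       "?criteria=model::Specimen,rma::criteria,[name$eq'H08-0083.01']")
msg = json.load(urllib2.urlopen(url))['msg']
print '%d specimen records matched' % len(msg)
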
The specimen name and marker list can be found by browsing human.brain-map.org.


In [3]:
# list of all experiments belonging to this specimen name
explist = api.getValidSpecimentsWithName('H08-0083.01')

# we want the first
e = explist[0]

# we then specify which markers to filter by
e.markersOfInterest = ['PCP4']

# and filter the available marker list
e.getMarkerList(verbose=False)

# finally, we query the api for the list of images that match our search criteria
e.getSectionImages()

# just confirming the name
print e.subjectName


H08-0083.01

We then import the processing module. It was initially written to wrap various shell scripts and command-line tools, but has since been expanded to include image processing steps implemented directly in scikit-image and NumPy.

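The command-line wrapping side follows a simple echo-then-execute pattern, as the log output below suggests. A minimal sketch of that pattern (pmip's actual helper is not shown in this notebook):

import subprocess

def run(cmd):
    # echo the command, then execute it
    print cmd
    subprocess.check_call(cmd, shell=True)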

In [4]:
import pmip
reload(pmip)  # in case any development has occurred since the last import

# we create an instance of the class, passing it the experiment we defined above
pe = pmip.Processing(e)

# initializing the environment creates the directories that derived data will go into
pe.initEnv()

# this is a utility command to list the file count in each directory
pe.listSubjectDirectory()


* initEnv
--------------------------------------------------------------------------------
found   : /data/reconstruction
found   : /data/reconstruction/specimens
directories for H08-0083_01 created
[0 files] /data/reconstruction/specimens/H08-0083_01/detect_points
[10 files] /data/reconstruction/specimens/H08-0083_01/detect_raw
[20 files] /data/reconstruction/specimens/H08-0083_01/register_contrast
[0 files] /data/reconstruction/specimens/H08-0083_01/register_density
[0 files] /data/reconstruction/specimens/H08-0083_01/register_points
[10 files] /data/reconstruction/specimens/H08-0083_01/register_raw
[10 files] /data/reconstruction/specimens/H08-0083_01/register_source
[0 files] /data/reconstruction/specimens/H08-0083_01/register_stack
[28 files] /data/reconstruction/specimens/H08-0083_01/register_target
[2 files] /data/reconstruction/specimens/H08-0083_01/video

Collect the raw images needed for processing. For registration, we use images requested with downsample=4; for cell detection, downsample=1, i.e. 50% of native resolution.

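To make the scale concrete: the image service's downsample=n parameter halves the image along each axis n times.

# downsample=n shrinks each axis by a factor of 2**n
for n in range(5):
    print 'downsample=%d -> 1/%d of native resolution per axis' % (n, 2 ** n)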

In [5]:
pe.collectImagesForCellDetection()


* collectRaw, downsample by 2^1
--------------------------------------------------------------------------------
-> collecting images from remote source
http://api.brain-map.org/cgi-bin/imageservice?path=/external/aibssan/production30/prod1/0400061699/0400061699.aff&top=1280&left=320&width=11776&height=8448&downsample=1
http://api.brain-map.org/cgi-bin/imageservice?path=/external/aibssan/production30/prod1/0400061699/0400061699.aff&top=448&left=24128&width=12128&height=9312&downsample=1
http://api.brain-map.org/cgi-bin/imageservice?path=/external/aibssan/production30/prod1/0400061843/0400061843.aff&top=64&left=64&width=12128&height=10319&downsample=1
http://api.brain-map.org/cgi-bin/imageservice?path=/external/aibssan/production30/prod1/0400061843/0400061843.aff&top=128&left=24704&width=13024&height=10287&downsample=1
http://api.brain-map.org/cgi-bin/imageservice?path=/external/aibssan/production30/prod1/0400061987/0400061987.aff&top=64&left=512&width=12672&height=10687&downsample=1
http://api.brain-map.org/cgi-bin/imageservice?path=/external/aibssan/production30/prod1/0400061987/0400061987.aff&top=512&left=24640&width=12672&height=10432&downsample=1
http://api.brain-map.org/cgi-bin/imageservice?path=/external/aibssan/production30/prod1/0400062131/0400062131.aff&top=128&left=64&width=12768&height=10656&downsample=1
http://api.brain-map.org/cgi-bin/imageservice?path=/external/aibssan/production30/prod1/0400062131/0400062131.aff&top=64&left=25664&width=12655&height=10496&downsample=1
http://api.brain-map.org/cgi-bin/imageservice?path=/external/aibssan/production30/prod1/0400062275/0400062275.aff&top=0&left=64&width=12608&height=10015&downsample=1
http://api.brain-map.org/cgi-bin/imageservice?path=/external/aibssan/production30/prod1/0400062275/0400062275.aff&top=448&left=24320&width=12832&height=9791&downsample=1

Previously, we used ImageJ for the point detection. This does not play nicely with a Vagrant configuration without several workarounds.

If you decide to go this route, you will need ~10 GB of RAM for the operation.


In [6]:
pe.extractPoints()


* extractPoints
--------------------------------------------------------------------------------
Executing ColorThresholdWithPointDetection.ijm on 10 files
/home/vagrant/Fiji.app/fiji-linux64 -Xms3000m --headless -batch /vagrant/src/fijimacro/ColorThresholdWithPointDetection.ijm /data/reconstruction/specimens/H08-0083_01/detect_raw/006-100034525-DSx1.jpg
/home/vagrant/Fiji.app/fiji-linux64 -Xms3000m --headless -batch /vagrant/src/fijimacro/ColorThresholdWithPointDetection.ijm /data/reconstruction/specimens/H08-0083_01/detect_raw/036-100034525-DSx1.jpg
/home/vagrant/Fiji.app/fiji-linux64 -Xms3000m --headless -batch /vagrant/src/fijimacro/ColorThresholdWithPointDetection.ijm /data/reconstruction/specimens/H08-0083_01/detect_raw/066-100034525-DSx1.jpg
/home/vagrant/Fiji.app/fiji-linux64 -Xms3000m --headless -batch /vagrant/src/fijimacro/ColorThresholdWithPointDetection.ijm /data/reconstruction/specimens/H08-0083_01/detect_raw/096-100034525-DSx1.jpg
/home/vagrant/Fiji.app/fiji-linux64 -Xms3000m --headless -batch /vagrant/src/fijimacro/ColorThresholdWithPointDetection.ijm /data/reconstruction/specimens/H08-0083_01/detect_raw/126-100034525-DSx1.jpg
/home/vagrant/Fiji.app/fiji-linux64 -Xms3000m --headless -batch /vagrant/src/fijimacro/ColorThresholdWithPointDetection.ijm /data/reconstruction/specimens/H08-0083_01/detect_raw/156-100034525-DSx1.jpg
/home/vagrant/Fiji.app/fiji-linux64 -Xms3000m --headless -batch /vagrant/src/fijimacro/ColorThresholdWithPointDetection.ijm /data/reconstruction/specimens/H08-0083_01/detect_raw/186-100034525-DSx1.jpg
/home/vagrant/Fiji.app/fiji-linux64 -Xms3000m --headless -batch /vagrant/src/fijimacro/ColorThresholdWithPointDetection.ijm /data/reconstruction/specimens/H08-0083_01/detect_raw/216-100034525-DSx1.jpg
/home/vagrant/Fiji.app/fiji-linux64 -Xms3000m --headless -batch /vagrant/src/fijimacro/ColorThresholdWithPointDetection.ijm /data/reconstruction/specimens/H08-0083_01/detect_raw/246-100034525-DSx1.jpg
/home/vagrant/Fiji.app/fiji-linux64 -Xms3000m --headless -batch /vagrant/src/fijimacro/ColorThresholdWithPointDetection.ijm /data/reconstruction/specimens/H08-0083_01/detect_raw/276-100034525-DSx1.jpg
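
Each line above is a headless Fiji invocation. For reference, such calls can be issued from Python roughly as follows (a sketch; the actual implementation of extractPoints may differ):

import subprocess

# paths taken from the log output above
fiji = '/home/vagrant/Fiji.app/fiji-linux64'
macro = '/vagrant/src/fijimacro/ColorThresholdWithPointDetection.ijm'

for img in pe.processing_status['detect']:
    subprocess.check_call([fiji, '-Xms3000m', '--headless', '-batch', macro, img])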

Instead, we've rewritten the point detection in scikit-image.


In [7]:
import os
import glob

import numpy as np
from scipy import ndimage
from skimage import color, measure

def detectPoints(f):
    """ This function takes an image file, converts it to HSV, locates puncta,
    and saves the area and centroid of each detected region.
    """

    print f
    f_a = os.path.join(pe.dirs['points'], os.path.basename(f) + '.area')
    f_c = os.path.join(pe.dirs['points'], os.path.basename(f) + '.centroid')

    if not os.path.exists(f_a):

        im = ndimage.imread(f)
        imHSV = color.rgb2hsv(im)

        # threshold on the saturation channel: stained puncta are strongly
        # saturated, the background is not
        imsat = imHSV[:,:,1]
        satThreshold = np.zeros_like(imsat)
        satThreshold[imsat > 0.05] = 1

        # clean up the binary mask, then label connected components
        fill_holes = ndimage.binary_fill_holes(satThreshold)
        remove_noise = ndimage.binary_opening(fill_holes, structure=np.ones((3,3))).astype(np.int)
        labeled_image, count = ndimage.label(remove_noise)
        regions = measure.regionprops(labeled_image, properties=['Area', 'Centroid'])

        a = []
        c = []

        for r in regions:
            a.append(r['Area'])
            c.append(r['Centroid'])

        np.savetxt(f_a, a)
        np.savetxt(f_c, c)

    # refresh the processing status with the outputs generated so far
    dscImageList = glob.glob(os.path.join(pe.dirs['points'], '*.area'))
    dscImageList.sort()
    pe.processing_status['regpointa'] = dscImageList

    dscImageList = glob.glob(os.path.join(pe.dirs['points'], '*.centroid'))
    dscImageList.sort()
    pe.processing_status['regpointc'] = dscImageList

Processing status is stored in pe.processing_status, which maps each processing step to the list of output files it generated.


In [8]:
pe.processing_status


Out[8]:
{'detect': ['/data/reconstruction/specimens/H08-0083_01/detect_raw/006-100034525-DSx1.jpg',
  '/data/reconstruction/specimens/H08-0083_01/detect_raw/036-100034525-DSx1.jpg',
  '/data/reconstruction/specimens/H08-0083_01/detect_raw/066-100034525-DSx1.jpg',
  '/data/reconstruction/specimens/H08-0083_01/detect_raw/096-100034525-DSx1.jpg',
  '/data/reconstruction/specimens/H08-0083_01/detect_raw/126-100034525-DSx1.jpg',
  '/data/reconstruction/specimens/H08-0083_01/detect_raw/156-100034525-DSx1.jpg',
  '/data/reconstruction/specimens/H08-0083_01/detect_raw/186-100034525-DSx1.jpg',
  '/data/reconstruction/specimens/H08-0083_01/detect_raw/216-100034525-DSx1.jpg',
  '/data/reconstruction/specimens/H08-0083_01/detect_raw/246-100034525-DSx1.jpg',
  '/data/reconstruction/specimens/H08-0083_01/detect_raw/276-100034525-DSx1.jpg']}

In [9]:
pe.listSubjectDirectory()


[0 files] /data/reconstruction/specimens/H08-0083_01/detect_points
[10 files] /data/reconstruction/specimens/H08-0083_01/detect_raw
[20 files] /data/reconstruction/specimens/H08-0083_01/register_contrast
[0 files] /data/reconstruction/specimens/H08-0083_01/register_density
[0 files] /data/reconstruction/specimens/H08-0083_01/register_points
[10 files] /data/reconstruction/specimens/H08-0083_01/register_raw
[10 files] /data/reconstruction/specimens/H08-0083_01/register_source
[0 files] /data/reconstruction/specimens/H08-0083_01/register_stack
[28 files] /data/reconstruction/specimens/H08-0083_01/register_target
[2 files] /data/reconstruction/specimens/H08-0083_01/video

In [10]:
# we can pull the file list from processing_status
pe.processing_status.keys()


Out[10]:
['detect']

In [11]:
# Define a function to run through the images to be processed
def runimages():
    pe._printTitle('detectPoints')
    for img in pe.processing_status['detect']:
        detectPoints(img)

The following command will produce an out-of-memory error on a Vagrant instance with less than 8 GB of RAM.


In [1]:
# run it (this requires significant resources - your machine should have ~8 GB of RAM or more available)
# alternatively, implement a partitioning approach and convert partial chunks of the image from RGB -> HSV

# uncomment to test
# runimages()

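One way to implement the partitioning suggested above: rgb2hsv is a per-pixel conversion, so a large image can be processed in horizontal strips to bound peak memory. A minimal sketch:

import numpy as np
from skimage import color

def rgb2hsv_chunked(im, rows=1024):
    # convert an RGB image to HSV in horizontal strips to limit peak
    # memory use; safe because the conversion is per-pixel
    out = np.empty(im.shape, dtype=np.float64)
    for r in range(0, im.shape[0], rows):
        out[r:r + rows] = color.rgb2hsv(im[r:r + rows])
    return out
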
After completing the point detection, we are left with a list of point centers and areas. In the final step, we transform each point by the transform created during the registration step.

Since this example won't run on the majority of laptops and workstations, we've included an example set of detected points in the detect_points folder.
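
For reference, applying such a transform to a saved centroid list might look like the sketch below. The affine parameters A and b are placeholders for the values produced by the registration step, which is not covered in this notebook.

import glob, os
import numpy as np

# load the first saved centroid file (an N x 2 array of (row, col) points)
f = sorted(glob.glob(os.path.join(pe.dirs['points'], '*.centroid')))[0]
pts = np.loadtxt(f)

# placeholder affine transform; the real values come from registration
A = np.eye(2)     # rotation / scale
b = np.zeros(2)   # translation
registered = pts.dot(A.T) + b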

